1 - 5 of 5
1.
Sci Rep ; 14(1): 9133, 2024 04 21.
Article En | MEDLINE | ID: mdl-38644370

Multimedia is extensively used for educational purposes. However, some multimedia is poorly designed and can impose excessive cognitive load on the user. It is therefore important to predict cognitive load and to understand how it impairs brain function. Participants watched a version of an educational multimedia presentation that applied Mayer's design principles, followed by a version that did not, while their electroencephalography (EEG) was recorded. They then completed a post-test and a self-reported cognitive load questionnaire. The audio envelope and word frequency were extracted from the multimedia, and temporal response functions (TRFs) were estimated with a linear encoding model. The behavioral data differed between the two groups, and the TRFs of the two multimedia versions also differed, with changes in the amplitudes and latencies of both early and late components. In addition, the behavioral data correlated with the amplitudes and latencies of TRF components. Cognitive load reduced participants' attention to the multimedia, and semantic processing of words occurred later and with smaller amplitude. Encoding models thus provide insight into the temporal and spatial mapping of cognitive load activity, which could help detect and reduce cognitive load in settings such as educational multimedia or simulators for different purposes.
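The linear encoding model described above can be sketched as time-lagged ridge regression of the EEG onto a stimulus feature. The snippet below is a minimal illustration on synthetic data; the sampling rate, lag range, and regularization value are illustrative assumptions, not the paper's actual parameters.

```python
# Sketch of temporal response function (TRF) estimation with a linear
# encoding model. Synthetic data stands in for real EEG and the audio
# envelope; all parameter choices here are illustrative assumptions.
import numpy as np

def lagged_design(stimulus, lags):
    """Build a design matrix of time-lagged copies of the stimulus."""
    n = len(stimulus)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stimulus[: n - lag]
    return X

def estimate_trf(stimulus, eeg, lags, ridge=1.0):
    """Ridge-regression TRF: w = (X'X + aI)^{-1} X'y."""
    X = lagged_design(stimulus, lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)

rng = np.random.default_rng(0)
fs = 100                                   # assumed sampling rate (Hz)
envelope = rng.standard_normal(fs * 60)    # 60 s of synthetic envelope
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
eeg = np.convolve(envelope, true_trf)[: len(envelope)]
eeg += 0.1 * rng.standard_normal(len(eeg))

lags = list(range(0, 5))                   # 0-40 ms lags at 100 Hz
trf = estimate_trf(envelope, eeg, lags)
print(np.round(trf, 2))
```

Shifts in the recovered weights' peak latency and amplitude between conditions are what the abstract reports as cognitive-load effects on the TRF components.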


Brain , Cognition , Electroencephalography , Multimedia , Humans , Cognition/physiology , Male , Female , Brain/physiology , Young Adult , Adult , Acoustic Stimulation , Linguistics , Attention/physiology
2.
Front Neurosci ; 16: 744737, 2022.
Article En | MEDLINE | ID: mdl-35979334

The use of multimedia learning is increasing in modern education, so it is crucial to design multimedia content that imposes an optimal amount of cognitive load and thereby leads to efficient learning. Objective assessment of instantaneous cognitive load plays a critical role in evaluating educational design quality. Among neurophysiological methods, electroencephalography (EEG) is considered a strong candidate for cognitive load assessment. In this study, we conducted an experiment to collect EEG signals during a multimedia learning task and then built a model for instantaneous cognitive load measurement. We designed four educational multimedia presentations in two categories, imposing different levels of cognitive load by intentionally applying or violating Mayer's multimedia design principles. Thirty university students with homogeneous English language proficiency participated in our experiment. We divided them randomly into two groups, and each group watched a version of the multimedia, followed by a recall test and a NASA-TLX questionnaire. EEG signals were collected during these tasks. To construct the load assessment model, power spectral density (PSD)-based features were first extracted from the EEG signals, and the best features were selected with the minimum redundancy - maximum relevance (MRMR) approach; the selected set contained only about 12% of the total number of features. We then propose a scoring model based on a support vector machine (SVM) for instantaneous cognitive load assessment on 3-s segments of multimedia. Our experiments indicate that the selected feature set can classify instantaneous cognitive load with an accuracy of 84.5 ± 2.1%. These findings indicate that EEG signals can serve as an appropriate tool for measuring the cognitive load introduced by educational videos, which can help instructional designers develop more effective content.
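The PSD-feature / feature-selection / SVM pipeline above can be sketched as follows on synthetic 3-s EEG segments. Note one substitution: scikit-learn has no MRMR implementation, so mutual-information ranking stands in for the paper's MRMR step; all sizes and parameters are illustrative assumptions.

```python
# Sketch of the PSD + feature-selection + SVM pipeline on synthetic data.
# Mutual-information selection is a stand-in for the paper's MRMR step.
import numpy as np
from scipy.signal import welch
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs, seg_len, n_segments, n_channels = 128, 3 * 128, 200, 4

# Synthetic 3-s EEG segments: "high load" segments get extra theta power.
labels = rng.integers(0, 2, n_segments)
t = np.arange(seg_len) / fs
segments = rng.standard_normal((n_segments, n_channels, seg_len))
segments[labels == 1] += 0.8 * np.sin(2 * np.pi * 6 * t)  # 6 Hz theta

# PSD-based features per channel (Welch), flattened per segment.
freqs, psd = welch(segments, fs=fs, nperseg=128, axis=-1)
features = psd.reshape(n_segments, -1)

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=20),  # keep a small feature subset
    StandardScaler(),
    SVC(kernel="rbf"),
)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"accuracy: {scores.mean():.2f}")
```

The paper's reported 84.5% accuracy came from real EEG with genuinely overlapping classes; the synthetic theta-power difference here is deliberately strong so the pipeline's mechanics are easy to verify.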

3.
Neural Netw ; 146: 174-180, 2022 Feb.
Article En | MEDLINE | ID: mdl-34883367

Graph construction plays an essential role in graph-based label propagation, since the graph encodes information about the structure of the data manifold. While most graph construction methods rely on a predefined distance calculation, recent algorithms merge label propagation and graph construction into a single process. Moreover, using several descriptors has been shown to outperform a single descriptor in representing the relations between nodes. In this article, we propose a Multiple-View Consistent Graph construction and Label propagation algorithm (MVCGL) that simultaneously constructs a consistent graph from several descriptors and propagates labels over unlabeled samples. Furthermore, it provides a mapping function from the feature space to the label space, with which we estimate the labels of unseen samples via a linear projection. The constructed graph does not rely on a predefined similarity function and exploits both data and label smoothness. Experiments conducted on three face databases and one handwritten digit database show that the proposed method outperforms other graph construction and label propagation methods.
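For context, the baseline the abstract argues against, label propagation over a graph built from a predefined similarity, looks like the sketch below (the paper's contribution is to learn the graph jointly instead of fixing a Gaussian kernel like this one). Data, kernel width, and iteration count are illustrative assumptions.

```python
# Classic graph-based label propagation over a fixed Gaussian-kernel graph.
# MVCGL's point is to learn the graph jointly rather than predefine it.
import numpy as np

def propagate_labels(W, Y, alpha=0.9, iters=100):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y on a normalized graph."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))        # symmetric normalization
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)

# Two Gaussian blobs, one labeled point per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
truth = np.repeat([0, 1], 30)

# Predefined similarity graph (the step MVCGL replaces with learning).
dists = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-dists / 0.2)
np.fill_diagonal(W, 0)

Y = np.zeros((60, 2))
Y[0, 0] = 1                                # labeled sample, class 0
Y[30, 1] = 1                               # labeled sample, class 1
pred = propagate_labels(W, Y)
accuracy = (pred == truth).mean()
print(accuracy)
```

With well-separated blobs even the fixed kernel succeeds; the learned-graph approach matters precisely where a single predefined similarity misrepresents the manifold.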


Algorithms , Data Management , Databases, Factual , Face
4.
Neural Netw ; 95: 91-101, 2017 Nov.
Article En | MEDLINE | ID: mdl-28934641

It is well known that dense coding with local bases (via least-squares coding schemes) can lead to large quantization errors or poor performance on machine learning tasks. Sparse coding, on the other hand, focuses on accurate representation without taking data locality into account, owing to its tendency to ignore the intrinsic structure hidden in the data. Local Hybrid Coding (LHC) (Xiang et al., 2014) was recently proposed as an alternative to the sparse coding scheme used in the Sparse Representation Classifier (SRC). LHC blends sparsity and basis-locality criteria in a unified optimization problem and retains the strengths of both, so hybrid codes have advantages over both dense and sparse codes. This paper introduces a data-driven graph construction method that exploits and extends the LHC scheme. In particular, we propose a new coding scheme, coined Adaptive Local Hybrid Coding (ALHC). The main contributions are as follows. First, the proposed coding scheme adaptively selects the local and non-local bases of LHC using data similarities provided by Locality-constrained Linear Coding. Second, the proposed ALHC exploits local similarities in its solution. Third, we use the proposed coding scheme for graph construction. For the task of graph-based label propagation, we demonstrate high classification performance of the proposed graph method on four benchmark face datasets: Extended Yale, PF01, PIE, and FERET.
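The Locality-constrained Linear Coding step that ALHC builds on has a simple closed form: reconstruct each sample from its k nearest dictionary atoms under a sum-to-one constraint. The sketch below shows that building block only, not ALHC itself; the dictionary, k, and regularizer are illustrative assumptions.

```python
# Sketch of Locality-constrained Linear Coding (LLC), the similarity
# source ALHC uses to pick local vs. non-local bases. Not ALHC itself.
import numpy as np

def llc_code(x, dictionary, k=5, reg=1e-4):
    """Code x over its k nearest atoms, with weights summing to one."""
    dists = np.linalg.norm(dictionary - x, axis=1)
    idx = np.argsort(dists)[:k]            # local bases only
    B = dictionary[idx]                    # (k, d) selected atoms
    z = B - x                              # shift atoms to the origin
    C = z @ z.T + reg * np.trace(z @ z.T) * np.eye(k)
    w = np.linalg.solve(C, np.ones(k))     # analytic LLC solution
    w /= w.sum()                           # enforce sum-to-one constraint
    code = np.zeros(len(dictionary))
    code[idx] = w
    return code

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 8))           # 50 atoms in 8 dimensions
x = 0.5 * D[3] + 0.5 * D[7]                # a point between two atoms
code = llc_code(x, D, k=5)
print(np.linalg.norm(code @ D - x))        # reconstruction error
```

Because each code has at most k nonzero entries tied to nearby atoms, the resulting codes double as sparse, locality-aware edge weights for graph construction, which is how such schemes feed label propagation.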


Machine Learning , Least-Squares Analysis
5.
IEEE Trans Cybern ; 43(3): 921-34, 2013 Jun.
Article En | MEDLINE | ID: mdl-23144037

Local discriminant embedding (LDE) has recently been proposed to overcome some limitations of the global linear discriminant analysis method. In the case of a small training data set, however, LDE cannot be applied directly to high-dimensional data; this is the so-called small-sample-size (SSS) problem. The classical solution to this problem is to apply dimensionality reduction to the raw data (e.g., using principal component analysis). In this paper, we introduce a novel discriminant technique called "exponential LDE" (ELDE). The proposed ELDE can be seen as an extension of the LDE framework in two directions. First, the proposed framework overcomes the SSS problem without discarding the discriminant information contained in the null space of the locality-preserving scatter matrices associated with LDE. Second, the proposed ELDE is equivalent to transforming the original data into a new space by distance diffusion mapping (similar to kernel-based nonlinear mapping) and then applying LDE in that new space. As a result of the diffusion mapping, the margin between samples belonging to different classes is enlarged, which helps improve classification accuracy. The experiments are conducted on five public face databases: Yale, Extended Yale, PF01, Pose, Illumination, and Expression (PIE), and Facial Recognition Technology (FERET). The results show that the proposed ELDE outperforms LDE and many state-of-the-art discriminant analysis techniques.
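The core trick, applying the matrix exponential to scatter matrices so the generalized eigenproblem stays well-posed when the within-class scatter is singular, can be illustrated on plain LDA-style scatters (the paper uses LDE's locality-preserving scatters instead). Everything below is a minimal sketch on synthetic SSS data, not the paper's algorithm.

```python
# Sketch of the exponential-scatter idea behind ELDE, demonstrated with
# ordinary between-/within-class scatter matrices for simplicity.
import numpy as np
from scipy.linalg import eigh, expm

def exponential_embedding(X, y, n_components=1):
    """Solve the generalized eigenproblem on expm(Sb), expm(Sw).

    expm(Sw) is full-rank even when Sw is singular (the SSS case), so no
    null-space information has to be discarded beforehand.
    """
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
        Sw += (Xc - mc).T @ (Xc - mc)
    vals, vecs = eigh(expm(Sb), expm(Sw))   # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]

# Tiny SSS example: 4 samples in 10 dimensions, so Sw is singular.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (2, 10)), rng.normal(3, 1, (2, 10))])
y = np.array([0, 0, 1, 1])
W = exponential_embedding(X, y)
proj = (X @ W).ravel()
print(proj)
```

Plain LDA would need a PCA pre-step here because Sw is rank-deficient; the exponential map sidesteps that while exaggerating large scatter directions, which is the margin-enlarging diffusion effect the abstract describes.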


Algorithms , Artificial Intelligence , Biometry/methods , Face/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Data Interpretation, Statistical , Discriminant Analysis , Humans